1.
Front Public Health ; 12: 1358043, 2024.
Article in English | MEDLINE | ID: mdl-38660351

ABSTRACT

Introduction: Suicide death remains a significantly rarer event among Latina/o/x populations compared to non-Latina/o/x populations. However, the reasons why Latina/o/x communities experience relatively lower suicide rates are not fully understood. Critical gaps exist in the examination of Latina/o/x suicide death, especially in rural settings, where suicide death by firearm is historically more common within non-Latina/o/x populations. Method: We tested whether the prevalence of Latina/o/x firearm suicide differed meaningfully between urban and rural environments and from non-Latina/o/x decedents when controlling for age, sex, and a social deprivation metric, the Area Deprivation Index. Suicide death data used in this analysis encompass 2,989 suicide decedents ascertained in Utah from 2016 to 2019, including death certificate data from the Utah Office of the Medical Examiner on all Utah suicide deaths, linked to additional information by staff at the Utah Population Database. Results: Compared to non-Latina/o/x suicide decedents, Latina/o/x suicide decedents had 34.7% lower adjusted odds of dying by firearm. Additionally, among suicide decedents living in rural counties, Latina/o/x decedents had 40.5% lower adjusted odds of dying by firearm compared to non-Latina/o/x suicide decedents. Discussion: The likelihood of firearm suicide death in Utah differed by ethnicity, even in rural populations. Our findings may point to underlying factors contributing to lower firearm suicide rates within Latina/o/x populations, e.g., aversion to firearms or less access to firearms, especially in rural areas, though additional research on these phenomena is needed.
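As an illustration of the kind of analysis described above, the sketch below fits a logistic regression for firearm suicide adjusted for age, sex, and the Area Deprivation Index. The file and column names (firearm, latino, age, sex, adi) are hypothetical placeholders, and the model is a generic stand-in rather than the authors' exact specification.

```python
# Sketch: adjusted odds of firearm suicide via logistic regression.
# All file and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

decedents = pd.read_csv("utah_suicide_decedents.csv")  # one row per decedent (hypothetical)

# Logit model: firearm (0/1) ~ ethnicity, adjusted for age, sex, and Area Deprivation Index.
model = smf.logit("firearm ~ latino + age + C(sex) + adi", data=decedents).fit()

odds_ratios = np.exp(model.params)      # adjusted odds ratios
conf_int = np.exp(model.conf_int())     # 95% confidence intervals
# An odds ratio of ~0.65 for 'latino' would correspond to ~35% lower adjusted odds.
print(odds_ratios["latino"], conf_int.loc["latino"].values)
```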


Subjects
Firearms, Hispanic or Latino, Rural Population, Suicide, Humans, Female, Utah/epidemiology, Male, Hispanic or Latino/statistics & numerical data, Rural Population/statistics & numerical data, Adult, Middle Aged, Firearms/statistics & numerical data, Suicide/statistics & numerical data, Aged, Adolescent, Young Adult, Urban Population/statistics & numerical data, Coroners and Medical Examiners/statistics & numerical data, Prevalence
2.
Pediatr Nephrol ; 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-38051389

ABSTRACT

BACKGROUND: Hemodialysis is a life-saving technology used during periods of acute or chronic kidney failure to remove toxins and maintain fluid, electrolyte, and metabolic balance. While this technology plays an important role for pediatric patients with kidney dysfunction, it can alter the pharmacokinetic behavior of medications, placing patients at risk for suboptimal dosing and drug toxicity. The ability to directly translate pharmacokinetic alterations into dosing recommendations has thus far been limited, and dosing guidance specific to pediatric hemodialysis patients is rare. Despite differences in dialysis prescription and patient populations, intermittent hemodialysis (iHD) and continuous kidney replacement therapy (CKRT) patients are often pooled together. In order to develop evidence-based dosing guidelines, it is important to first prioritize drugs for study in each modality. METHODS: Here we aim to identify priority drugs in two hemodialysis modalities by: (1) identifying hospitalized pediatric patients who received CKRT or iHD using a machine learning-based predictive model based on medications; (2) identifying medication administration patterns in these patient cohorts; and (3) identifying the most commonly prescribed drugs that lack published dosing guidance. RESULTS: Notable differences were found in the pattern of medications and drug dosing guidance between iHD and CKRT patients. Antibiotics, diuretics, and sedatives were more common in CKRT patients. Of the 50 most commonly administered medications in the two modalities, only 34% and 28% had dosing guidance for iHD and CKRT, respectively. CONCLUSIONS: Our results add to the understanding of the differences between iHD and CKRT patient populations by identifying commonly used medications that lack dosing guidance for each hemodialysis modality, helping to pinpoint priority medications for further study. Overall, this study provides an overview of the current limitations in medication use in this at-risk population and provides a framework for future studies by identifying commonly used medications in pediatric CKRT and iHD patients. A higher-resolution version of the Graphical abstract is available as Supplementary information.
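A minimal sketch of step (3), ranking the most frequently administered drugs and checking them against published dosing guidance. The input files and column names are hypothetical placeholders, not the study's actual data model.

```python
# Sketch: rank medications by administration frequency and flag missing dosing guidance.
# File and column names are hypothetical placeholders.
import pandas as pd

admins = pd.read_csv("med_administrations.csv")             # one row per administration: 'drug', 'modality'
guidance = set(pd.read_csv("dosing_guidance.csv")["drug"])  # drugs with published dialysis dosing guidance

for modality in ["iHD", "CKRT"]:
    top50 = (admins.loc[admins["modality"] == modality, "drug"]
                   .value_counts()
                   .head(50))
    covered = sum(drug in guidance for drug in top50.index)
    print(f"{modality}: {covered / 50:.0%} of the top 50 drugs have dosing guidance")
```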

3.
Article in English | MEDLINE | ID: mdl-36405250

ABSTRACT

Electronic health records (EHRs) have given rise to large and complex databases of medical information that have the potential to become powerful tools for clinical research. However, differences in coding systems and in the detail and accuracy of the information within EHRs can vary across institutions. This makes it challenging to identify subpopulations of patients and limits the widespread use of multi-institutional databases. In this study, we leveraged machine learning to identify patterns in medication usage among hospitalized pediatric patients receiving renal replacement therapy and created a predictive model that successfully differentiated between intermittent hemodialysis (iHD) and continuous renal replacement therapy (CRRT) patients. We trained six machine learning algorithms (logistic regression, naïve Bayes, k-nearest neighbors, support vector machine, random forest, and gradient boosted trees) using patient records from a multi-center database (n = 533) and prescribed medication ingredients (n = 228) as features to discriminate between the two hemodialysis types. Predictive skill was assessed using 5-fold cross-validation, and the algorithms showed a range of performance from 0.70 balanced accuracy (logistic regression) to 0.86 (random forest). The two best-performing models were further tested on an independent single-center dataset and achieved 84-87% balanced accuracy. This model overcomes issues inherent in large databases and will allow us to utilize and combine historical records, significantly increasing population size and diversity within both iHD and CRRT populations for future clinical studies. Our work demonstrates the utility of using medications alone to accurately differentiate subpopulations of patients in large datasets, allowing codes to be transferred between different coding systems. This framework has the potential to be used to distinguish other subpopulations of patients where discriminatory ICD codes are not available, permitting more detailed insights and new lines of research.
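The sketch below illustrates the general benchmarking setup described here, scoring several scikit-learn classifiers with 5-fold cross-validated balanced accuracy. The feature file, label column, and hyperparameters are assumptions, not the authors' exact pipeline.

```python
# Sketch: compare classifiers on a medication-ingredient feature matrix
# using 5-fold cross-validated balanced accuracy. Data loading is a placeholder.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X = pd.read_csv("medication_features.csv")   # one row per patient, one 0/1 column per ingredient (hypothetical)
y = pd.read_csv("labels.csv")["modality"]    # "iHD" or "CRRT" (hypothetical)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "naive Bayes": BernoulliNB(),
    "k-nearest neighbors": KNeighborsClassifier(),
    "support vector machine": SVC(),
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "gradient boosted trees": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy")
    print(f"{name}: {scores.mean():.2f} balanced accuracy")
```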

4.
Sci Rep ; 12(1): 2026, 2022 02 07.
Article in English | MEDLINE | ID: mdl-35132100

ABSTRACT

Explaining the factors that influence past dietary variation is critically important for understanding changes in subsistence, health, and status in past societies; yet systematic studies comparing possible driving factors remain scarce. Here we compile the largest dataset of past diet derived from stable isotope δ13C‰ and δ15N‰ values in the Americas to quantitatively evaluate the impact of 7000 years of climatic and demographic change on dietary variation in the Central Andes. Specifically, we couple paleoclimatic data from a general circulation model with estimates of relative past population inferred from archaeologically derived radiocarbon dates to assess the influence of climate and population on spatiotemporal dietary variation using an ensemble machine learning model capable of accounting for interactions among predictors. Results reveal that climate and population strongly predict diet (80% of δ15N‰ and 66% of δ13C‰) and that Central Andean diets correlate much more strongly with local climatic conditions than regional population size, indicating that the past 7000 years of dietary change was influenced more by climatic than socio-demographic processes. Visually, the temporal pattern suggests decreasing dietary variation across elevation zones during the Late Horizon, raising the possibility that sociopolitical factors overrode the influence of local climatic conditions on diet during that time. The overall findings and approach establish a general framework for understanding the influence of local climate and demography on dietary change across human history.
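A random forest regressor is one common choice for this kind of ensemble approach. The sketch below shows how variance explained and predictor importances might be estimated; the predictor and target column names are hypothetical stand-ins for the study's actual paleoclimate and population covariates.

```python
# Sketch: estimate how well climate and population predict stable-isotope diet proxies,
# using a random forest with out-of-bag R^2. Columns are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

data = pd.read_csv("andes_isotopes.csv")     # one row per dated sample (hypothetical)
predictors = ["temperature", "precipitation", "population", "elevation"]

for target in ["d15N", "d13C"]:
    rf = RandomForestRegressor(n_estimators=1000, oob_score=True, random_state=0)
    rf.fit(data[predictors], data[target])
    print(target, "variance explained (OOB R^2):", round(rf.oob_score_, 2))
    print(dict(zip(predictors, rf.feature_importances_.round(2))))  # relative predictor influence
```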

5.
Sci Adv ; 7(23)2021 Jun.
Article in English | MEDLINE | ID: mdl-34088663

ABSTRACT

When a peatland is drained and cultivated, it behaves as a notable source of CO2. However, we lack temporally and spatially explicit estimates of carbon losses from cultivated peatlands. Using a process-based land surface model that explicitly represents peatland processes, we estimate that northern peatlands converted to croplands emitted 72 Pg C over 850-2010, with 45% of this source having occurred before 1750. This source surpassed the carbon accumulation by high-latitude undisturbed peatlands (36 to 47 Pg C). Carbon losses from the cultivation of northern peatlands are omitted in previous land-use emission assessments. Adding this ignored historical land-use emission implies an 18% larger terrestrial carbon storage since 1750 to close the historical global carbon budget. We also show that carbon emissions per unit area decrease with time since drainage, suggesting that time since drainage should be accounted for in inventories to refine land-use emissions from cultivated peatlands.

6.
PLoS One ; 15(10): e0239424, 2020.
Article in English | MEDLINE | ID: mdl-33002016

ABSTRACT

Predictive models are central to both archaeological research and cultural resource management. Yet archaeological applications of predictive models are often insufficient due to small training data sets, inadequate statistical techniques, and a lack of theoretical insight to explain the responses of past land use to predictor variables. Here we address these critiques and evaluate the predictive power of four statistical approaches widely used in ecological modeling (generalized linear models, generalized additive models, maximum entropy, and random forests) to predict the locations of Formative Period (2100-650 BP) archaeological sites in the Grand Staircase-Escalante National Monument. We assess each modeling approach using a threshold-independent measure, the area under the curve (AUC), and threshold-dependent measures such as the true skill statistic. We find that the majority of the modeling approaches struggle with archaeological datasets due to the frequent lack of true-absence locations, which violates the model assumptions of generalized linear models, generalized additive models, and random forests, as well as measures of their predictive power (AUC). Maximum entropy is the only method tested here that can utilize pseudo-absence points (inferred absence data based on known presence data) and control for non-representative sampling of the landscape, making it the best modeling approach for common archaeological data when the goal is prediction. Regression-based approaches may be more applicable when prediction is not the goal, given their grounding in well-established statistical theory. Random forests, while the most powerful, are not applicable to archaeological data except in the rare case where true-absence data exist. Our results have significant implications for the application of predictive models by archaeologists for research and conservation purposes and highlight the importance of understanding model assumptions.
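The sketch below illustrates the general presence/pseudo-absence workflow and a threshold-independent AUC evaluation, using logistic regression as a simple stand-in for the models compared here; the file names and covariates are hypothetical.

```python
# Sketch: combine site presences with pseudo-absence (background) points and score
# a presence/absence model with AUC. File and column names are placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

sites = pd.read_csv("site_locations.csv")          # known site presences with environmental covariates
background = pd.read_csv("background_points.csv")  # random landscape points used as pseudo-absences

covariates = ["elevation", "slope", "dist_to_water"]
X = pd.concat([sites[covariates], background[covariates]])
y = np.concatenate([np.ones(len(sites)), np.zeros(len(background))])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC: {auc:.2f}")   # threshold-independent measure of predictive power
```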


Subjects
Archaeology, Machine Learning, Statistical Models, Area Under the Curve, Regression Analysis
7.
Article in English | MEDLINE | ID: mdl-28872595

ABSTRACT

Few studies of walkability include both perceived and audited walkability measures. We examined perceived walkability (Neighborhood Environment Walkability Scale-Abbreviated, NEWS-A) and audited walkability (Irvine-Minnesota Inventory, IMI) measures for residents living within 2 km of a "complete street," one renovated with light rail, bike lanes, and sidewalks. For perceived walkability, we found some differences but substantial similarity between our final scales and those in a prior published confirmatory factor analysis. Perceived walkability, in interaction with distance, was related to active transportation on the complete street. Residents were likely to use active transportation on the street when they lived nearby and perceived good aesthetics, crime safety, and traffic safety. Audited walkability, analyzed with decision trees, showed three general clusters of walkability areas, with 12 specific subtypes. A subset of walkability items (n = 11), including sidewalks, zebra-striped crosswalks, decorative sidewalks, pedestrian signals, and blank walls, combined to cluster street segments. The 12 subtypes yielded 81% correct classification of residents' active transportation. Both perceived and audited walkability were important predictors of active transportation. For audited walkability, we recommend further exploration of decision tree approaches, given their predictive utility and ease of translation into walkability interventions.
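A decision tree over audited items of the kind listed above can be sketched as follows; the audit file, item names, and outcome column are hypothetical placeholders, and the tree is a generic illustration rather than the authors' fitted model.

```python
# Sketch: fit a decision tree on audited walkability items to classify street-segment use.
# Feature and file names are hypothetical placeholders.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import cross_val_score

audit = pd.read_csv("segment_audit.csv")     # one row per street segment, 0/1 audit items
items = ["sidewalk", "zebra_crosswalk", "decorative_sidewalk", "pedestrian_signal", "blank_wall"]
y = audit["active_transportation"]           # whether residents used the segment for walking/biking

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
print(cross_val_score(tree, audit[items], y, cv=5).mean())   # cross-validated classification accuracy

tree.fit(audit[items], y)
print(export_text(tree, feature_names=items))  # readable rules, easy to translate into interventions
```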


Subjects
Environment Design, Residence Characteristics, Transportation, Walking, Cities, Humans, Minnesota
8.
Intensive Care Med ; 41(5): 814-22, 2015 May.
Article in English | MEDLINE | ID: mdl-25851384

ABSTRACT

INTRODUCTION: Sepsis is a devastating condition that is generally treated as a single disease. Identification of meaningfully distinct clusters may improve research, treatment, and prognostication among septic patients. We therefore sought to identify clusters among patients with severe sepsis or septic shock. METHODS: We retrospectively studied all patients with severe sepsis or septic shock admitted directly from the emergency department to the intensive care units (ICUs) of three hospitals, 2006-2013. Using age and Sequential Organ Failure Assessment (SOFA) subscores, we defined clusters utilizing self-organizing maps, a method for representing multidimensional data in intuitive two-dimensional grids to facilitate cluster identification. RESULTS: We identified 2,533 patients with severe sepsis or septic shock. Overall mortality was 17%, with a mean APACHE II score of 24, a mean SOFA score of 8, and a mean ICU stay of 5.4 days. Four distinct clusters were identified: (1) shock with elevated creatinine, (2) minimal multi-organ dysfunction syndrome (MODS), (3) shock with hypoxemia and altered mental status, and (4) hepatic disease. Mortality (95% confidence intervals) for these clusters was 11% (8-14), 12% (11-14), 28% (25-32), and 21% (16-26), respectively (p < 0.0001). Regression modeling demonstrated that the clusters differed in the association between clinical outcomes and predictors, including the APACHE II score. CONCLUSIONS: We identified four distinct clusters of MODS among patients with severe sepsis or septic shock. These clusters may reflect underlying pathophysiological differences and could potentially facilitate tailored treatments or directed research.
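A self-organizing map over age and SOFA subscores could be trained along the following lines, here using the third-party MiniSom package as an example implementation; the input file, column names, grid size, and training parameters are assumptions.

```python
# Sketch: map septic patients onto a self-organizing map using age and SOFA subscores.
# Uses the third-party MiniSom package; inputs are hypothetical placeholders.
import pandas as pd
from minisom import MiniSom

cohort = pd.read_csv("sepsis_cohort.csv")
features = ["age", "sofa_resp", "sofa_coag", "sofa_liver", "sofa_cardio", "sofa_cns", "sofa_renal"]
X = cohort[features].to_numpy(dtype=float)
X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize before training

som = MiniSom(10, 10, X.shape[1], sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(X, 10000)

# Each patient maps to a grid node; nearby nodes with similar weights form candidate clusters.
cohort["node"] = [som.winner(row) for row in X]
print(cohort["node"].value_counts().head())
```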


Subjects
Multiple Organ Failure/diagnosis, Multiple Organ Failure/mortality, Phenotype, Sepsis/diagnosis, Sepsis/mortality, Septic Shock/diagnosis, Septic Shock/mortality, APACHE, Aged, Female, Hospital Mortality, Humans, Male, Middle Aged, Organ Dysfunction Scores, Retrospective Studies, Sepsis/epidemiology, Utah/epidemiology